

Making Mobile Applications Accessible with Machine Learning

#artificialintelligence

At Apple, we use machine learning to teach our products to understand the world more as humans do. Understanding the world better also means we can build great assistive experiences. Machine learning can help our products be intelligent and intuitive enough to improve the day-to-day experiences of people living with disabilities. We can build machine-learned features that support a wide range of users, including those who are blind or have low vision, those who are deaf or hard of hearing, those with physical motor limitations, and those with cognitive disabilities. Mobile devices and their apps have become ubiquitous.


CMU, Apple Team Improves iOS App Accessibility

CMU School of Computer Science

A team at Apple analyzed nearly 78,000 screenshots from more than 4,000 apps to improve the screen reader function on its mobile devices. The result was Screen Recognition, a tool that uses machine learning and computer vision to automatically detect on-screen elements and provide content readable by VoiceOver for apps that would otherwise not be accessible. Jason Wu, a Ph.D. student in Carnegie Mellon University's Human-Computer Interaction Institute (HCII), was part of the team, whose work, "Screen Recognition: Creating Accessibility Metadata for Mobile Applications From Pixels," won a Best Paper award at the recent Association for Computing Machinery (ACM) Computer-Human Interaction (CHI) conference. His advisor, Jeffrey Bigham, an associate professor in HCII and the Language Technologies Institute and head of the Human-Centered Machine Learning Group at Apple, was also among the paper's authors. Apple's VoiceOver uses metadata supplied by app developers that describes user interface components.
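
For readers unfamiliar with the metadata in question, the sketch below shows roughly what it looks like in UIKit. It is a generic illustration, not Screen Recognition's actual code: the view controller, button, and "detected" region are hypothetical. The first block is the kind of metadata app developers normally attach by hand; the second shows one way content inferred at runtime for an unlabeled region could be exposed to VoiceOver.

```swift
import UIKit

// Hypothetical view controller illustrating the accessibility metadata
// VoiceOver relies on. None of this is Apple's Screen Recognition code.
final class PlayerViewController: UIViewController {

    private let playButton = UIButton(type: .system)

    override func viewDidLoad() {
        super.viewDidLoad()
        view.addSubview(playButton)

        // 1. Developer-supplied metadata describing a UI component.
        playButton.isAccessibilityElement = true
        playButton.accessibilityLabel = "Play"   // what VoiceOver speaks
        playButton.accessibilityTraits = .button // how the element behaves

        // 2. Metadata created at runtime for a region that has none,
        //    e.g. a custom-drawn control recognizable only from its pixels.
        let detected = UIAccessibilityElement(accessibilityContainer: view!)
        detected.accessibilityLabel = "Volume slider"
        detected.accessibilityTraits = .adjustable
        detected.accessibilityFrameInContainerSpace = CGRect(x: 20, y: 400,
                                                             width: 280, height: 44)
        view.accessibilityElements = [playButton, detected]
    }
}
```

When developers omit metadata like that in the first block, Screen Recognition infers comparable labels, traits, and frames directly from the rendered pixels so VoiceOver still has something to read.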